DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts

Braun, Tobias, Rothermel, Mark, Rohrbach, Marcus, Rohrbach, Anna

arXiv.org Artificial Intelligence

The proliferation of disinformation presents a growing threat to societal trust and democracy, necessitating robust and scalable fact-checking systems. In this work, we present Dynamic Evidence-based FAct-checking with Multimodal Experts (DEFAME), a modular, zero-shot MLLM pipeline for open-domain, text-image claim verification. DEFAME frames fact-checking as a six-stage process, dynamically deciding which external tools to use for retrieving textual and visual evidence. In addition to the claim's veracity, DEFAME returns a justification accompanied by a comprehensive, multimodal fact-checking report. While most alternatives either focus on sub-tasks of fact-checking, lack explainability, or are limited to text-only inputs, DEFAME solves the fact-checking problem end-to-end, including claims with images and claims that require visual evidence. Evaluation on the popular benchmarks VERITE, AVeriTeC, and MOCHEG shows that DEFAME surpasses all previous methods, establishing it as the new state-of-the-art fact-checking system.
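The abstract describes DEFAME as a staged pipeline that threads a claim through evidence retrieval and verdict stages, accumulating a multimodal report. The abstract does not name the six stages, so the stage functions below are hypothetical placeholders; this is only a minimal sketch of the report-threading pattern, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FactCheckReport:
    """Accumulates evidence, a verdict, and a justification for one claim."""
    claim: str
    evidence: list = field(default_factory=list)
    verdict: str = "unverified"
    justification: str = ""

def run_pipeline(claim, stages):
    """Thread a report object through an ordered list of stage functions."""
    report = FactCheckReport(claim)
    for stage in stages:
        report = stage(report)
    return report

# Placeholder stages -- DEFAME's actual six stages are not named in the abstract.
def retrieve_text_evidence(report):
    report.evidence.append(("text", f"search results for: {report.claim}"))
    return report

def judge(report):
    report.verdict = "supported" if report.evidence else "unverified"
    report.justification = f"based on {len(report.evidence)} evidence item(s)"
    return report

report = run_pipeline("The Eiffel Tower is in Paris.",
                      [retrieve_text_evidence, judge])
print(report.verdict)  # supported
```

A real system would replace the placeholder stages with tool-dispatching retrievers (web search, reverse image search) and an MLLM-backed judge, but the report object passed stage-to-stage is what makes the final justification traceable.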


Estimate the building height at a 10-meter resolution based on Sentinel data

Yan, Xin

arXiv.org Artificial Intelligence

Building height is an important indicator for scientific research and practical applications. However, building height products with a high spatial resolution (10m) are still very scarce. To meet the needs of high-resolution building height estimation models, this study established a set of spatial-spectral-temporal feature databases, combining SAR data provided by Sentinel-1, optical data provided by Sentinel-2, and shape data provided by building footprints. Statistical indicators on the time scale were extracted to form a rich database of 160 features. The study combined permutation feature importance, Shapley Additive Explanations, and random forest variable importance, obtaining the final stable feature set through an expert scoring system. Twelve large, medium, and small cities in the United States served as training data, and moving windows were used to aggregate pixels, mitigating the effects of SAR image displacement and building shadows. A building height model was built on random forests, and three model ensemble methods (bagging, boosting, and stacking) were compared. To evaluate the accuracy of the predictions, Lidar data were collected in the test area; the evaluation showed an R-squared of 0.78, demonstrating that building height can be estimated effectively. The fast production of high-resolution building height data can support large-scale scientific research and applications in many fields.
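One of the feature-selection tools the abstract names is permutation feature importance: shuffle one feature column and measure how much the model's error grows. The toy model and data below are illustrative stand-ins (the study's 160 Sentinel-derived features and trained random forest are not reproduced here); the sketch only shows the mechanic.

```python
import random

def mse(model, X, y):
    """Mean squared error of a predict-one-row callable over a dataset."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Error increase after shuffling one feature column (higher = more important)."""
    baseline = mse(model, X, y)
    col = [row[feature] for row in X]
    random.Random(seed).shuffle(col)
    X_perm = [list(row) for row in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - baseline

# Toy data: the target depends only on feature 0, never on feature 1.
X = [[i, (i * 7) % 5] for i in range(20)]
y = [3 * row[0] for row in X]
model = lambda row: 3 * row[0]   # stand-in for the trained random forest

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
print(imp0 > imp1)  # shuffling the used feature hurts; the unused one does not
```

Ranking features this way, alongside SHAP values and the forest's own variable importance, is how a study can cut a 160-feature database down to a stable subset.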


Robotics Engineer - Machine Vision - AI Jobs

#artificialintelligence

RFA Engineering (www.rfamec.com) is seeking talent in the fields of Robotics, Perception, Vision Processing and Machine Learning to architect, develop and integrate new intelligent features into the next generation of agricultural and off-highway equipment. From this Midwestern location, you could work as part of a global team of world-class engineers and researchers that are leading the implementation of autonomous and semi-autonomous technology in their industry. You will be working with a high-velocity team of multi-disciplined engineers, developers and architects that are developing new applications using state-of-the-art technologies including 3D vision systems, machine learning, sensor fusion technology, FPGAs and GPUs. This is an excellent growth opportunity for anyone interested in these emerging technologies. Our primary focus is product development of off-highway equipment including agricultural, construction, mining, recreational, industrial, and special machines.


Top 10 global manufacturers using 5G

#artificialintelligence

To further explore the intersection of 5G and manufacturing, register for the 5G Manufacturing Forum. Global manufacturers are starting to adopt 5G to improve manufacturing processes. Low latency and high reliability are needed to support critical applications in the manufacturing field. Several top manufacturers are already taking advantage of 5G implementations to improve operations in different industrial environments. Here we briefly describe some implementations by large manufacturers globally.


Knowledge-driven Natural Language Understanding of English Text and its Applications

Basu, Kinjal, Varanasi, Sarat, Shakerin, Farhad, Arias, Joaquin, Gupta, Gopal

arXiv.org Artificial Intelligence

Understanding the meaning of a text is a fundamental challenge of natural language understanding (NLU) research. An ideal NLU system should process a language in a way that is not exclusive to a single task or dataset. With this in mind, we introduce a novel knowledge-driven semantic representation approach for English text. By leveraging the VerbNet lexicon, we are able to map the syntax tree of a text to its commonsense meaning represented using basic knowledge primitives. The general-purpose knowledge produced by our approach can be used to build any reasoning-based NLU system that can also provide justifications. We applied this approach to construct two NLU applications that we present here: SQuARE (Semantic-based Question Answering and Reasoning Engine) and StaCACK (Stateful Conversational Agent using Commonsense Knowledge). Both systems work by "truly understanding" the natural language text they process, and both provide natural language explanations for their responses while maintaining high accuracy.


The 2020s Political Economy of Machine Translation

Weber, Steven

arXiv.org Artificial Intelligence

This paper explores the hypothesis that the diversity of human languages, right now a barrier to interoperability in communication and trade, will become significantly less of a barrier as machine translation technologies are deployed over the next several years. But this new boundary-breaking technology does not reduce all boundaries equally, and it creates new challenges for the distribution of ideas and thus for innovation and economic growth.


SQuARE: Semantics-based Question Answering and Reasoning Engine

Basu, Kinjal, Varanasi, Sarat Chandra, Shakerin, Farhad, Gupta, Gopal

arXiv.org Artificial Intelligence

Understanding the meaning of a text is a fundamental challenge of natural language understanding (NLU), and from its early days it has received significant attention through question answering (QA) tasks. We introduce a general semantics-based framework for natural language QA and also describe the SQuARE system, an application of this framework. The framework is based on the denotational semantics approach widely used in programming language research. In our framework, a valuation function maps the syntax tree of a text to its commonsense meaning, represented using basic knowledge primitives (the semantic algebra) coded in answer set programming (ASP). We illustrate an application of this framework by using VerbNet primitives as our semantic algebra and a novel algorithm based on partial tree matching that generates an answer set program representing the knowledge in the text. A question posed against that text is converted into an ASP query using the same framework and executed using the s(CASP) goal-directed ASP system. Our approach is based purely on (commonsense) reasoning. SQuARE achieves 100% accuracy on all five of the bAbI QA task datasets we tested. The significance of our work is that, unlike machine-learning-based approaches, ours is based on "understanding" the text and does not require any training. SQuARE can also generate an explanation for an answer while maintaining high accuracy.
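The core idea above is denotational: a valuation function maps each syntax-tree node to knowledge primitives, and questions are answered by querying those primitives. The sketch below mimics that pattern in plain Python on a hand-built tree; the node shapes and primitive names are illustrative assumptions, not SQuARE's actual VerbNet algebra or its ASP/s(CASP) encoding.

```python
def valuate(node):
    """Toy valuation function: map a syntax-tree node to relational primitives."""
    kind = node[0]
    if kind == "sentence":                # ("sentence", subject, verb, object)
        _, subj, verb, obj = node
        return [(verb, subj, obj)]        # one primitive fact per clause
    if kind == "and":                     # ("and", left_subtree, right_subtree)
        return valuate(node[1]) + valuate(node[2])
    raise ValueError(f"unknown node kind: {kind}")

# A bAbI-style two-clause story, already parsed into a toy tree.
tree = ("and",
        ("sentence", "mary", "move", "kitchen"),
        ("sentence", "mary", "grab", "apple"))
facts = valuate(tree)
print(facts)  # [('move', 'mary', 'kitchen'), ('grab', 'mary', 'apple')]

def query(facts, pred, subj):
    """Answer a where/what question by matching against the primitives."""
    return [obj for p, s, obj in facts if p == pred and s == subj]

print(query(facts, "move", "mary"))  # ['kitchen']
```

In the real system the facts would be ASP rules queried by a goal-directed solver, which is also what makes the natural-language explanations possible: the proof of the query is itself the justification.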


Deep Reinforcement Learning for Autonomous Driving: A Survey

Kiran, B Ravi, Sobh, Ibrahim, Talpaert, Victor, Mannion, Patrick, Sallab, Ahmad A. Al, Yogamani, Senthil, Pérez, Patrick

arXiv.org Artificial Intelligence

With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework, now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms, provides a taxonomy of automated driving tasks where (D)RL methods have been employed, highlights the key challenges both algorithmic and in deploying real-world autonomous driving agents, discusses the role of simulators in training agents, and finally covers methods to evaluate, test, and robustify existing solutions in RL and imitation learning.


Special Delivery: With U.S. Post Office on Board, NVIDIA to Enable AI Deployment, NVIDIA's Ian Buck Says | The Official NVIDIA Blog

#artificialintelligence

Kicking off the Washington edition of our GPU Technology Conference, Buck, NVIDIA's VP for accelerated computing, detailed a new generation of technologies that will help companies put modern AI to work. Buck also announced that the United States Postal Service -- the world's largest delivery service, with 146 billion pieces of mail processed and delivered annually -- is adopting end-to-end AI technology from NVIDIA. "The challenge is how do we take AI from innovation to actually applying AI," Buck told an audience of more than 3,500 developers, CIOs and federal employees at the three-day GTC DC. "Our challenge, NVIDIA's challenge, and my challenge is 'How can I bring AI to industries and activate it.'" Over the course of his hour-long talk, Buck explained how modern AI is trained and deployed, and described how NVIDIA is adapting AI for the automotive, healthcare, robotics, and 5G industries, among others. The U.S. Postal Service offers a glimpse at what's possible. Buck said the U.S. Postal Service will roll out a deep learning solution based on NVIDIA EGX to 200 processing facilities that should be operational in 2020.


Feature-Cost Sensitive Learning with Submodular Trees of Classifiers

Kusner, Matt (Washington University in St. Louis) | Chen, Wenlin (Washington University in St. Louis) | Zhou, Quan (Tsinghua University) | Xu, Zhixiang (Eddie) (Washington University in St. Louis) | Weinberger, Kilian (Washington University in St. Louis) | Chen, Yixin (Washington University in St. Louis)

AAAI Conferences

During the past decade, machine learning algorithms have become commonplace in large-scale real-world industrial applications. In these settings, the computation time to train and test machine learning algorithms is a key consideration. At training time, the algorithms must scale to very large data set sizes. At test time, the cost of feature extraction can dominate the CPU runtime. Recently, a promising method was proposed to account for the feature extraction cost at test time, called Cost-sensitive Tree of Classifiers (CSTC). Although the CSTC problem is NP-hard, the authors suggest an approximation through a mixed-norm relaxation across many classifiers. This relaxation is slow to train and requires involved optimization hyperparameter tuning. We propose a different relaxation using approximate submodularity, called Approximately Submodular Tree of Classifiers (ASTC). ASTC is much simpler to implement, yields equivalent results, requires no optimization hyperparameter tuning, and is up to two orders of magnitude faster to train.
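The appeal of an (approximately) submodular objective is that a cheap greedy procedure comes with quality guarantees. The paper's tree-of-classifiers construction is not reproduced here; the sketch below shows only the generic pattern of cost-aware greedy selection (pick the best marginal-gain-per-cost feature until the best-ratio candidate no longer fits the budget), with made-up feature names, gains, and costs.

```python
def greedy_budgeted_selection(gains, costs, budget):
    """Greedily pick features by marginal gain per unit extraction cost.

    `gains` maps feature -> marginal utility (assumed diminishing, i.e.
    approximately submodular); `costs` maps feature -> extraction cost.
    Stops as soon as the best-ratio remaining feature exceeds the budget.
    """
    chosen, spent = [], 0.0
    remaining = set(gains)
    while remaining:
        best = max(remaining, key=lambda f: gains[f] / costs[f])
        if spent + costs[best] > budget:
            break
        chosen.append(best)
        spent += costs[best]
        remaining.remove(best)
    return chosen, spent

# Hypothetical features: a cheap pixel statistic, an edge histogram,
# and an expensive deep embedding that blows the extraction budget.
gains = {"cheap_pixel": 4.0, "edge_hist": 6.0, "deep_embed": 9.0}
costs = {"cheap_pixel": 1.0, "edge_hist": 2.0, "deep_embed": 8.0}
chosen, spent = greedy_budgeted_selection(gains, costs, budget=4.0)
print(chosen, spent)  # ['cheap_pixel', 'edge_hist'] 3.0
```

For truly submodular gains, this kind of greedy rule is near-optimal; ASTC's contribution is showing the tree-of-classifiers objective is close enough to submodular that such simple optimization replaces the slow mixed-norm relaxation.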